

Search for: All records

Creators/Authors contains: "Chen, Linlin"

Note: When clicking a Digital Object Identifier (DOI) link, you will be taken to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Traditional workload analysis uses discrete times measured by data accesses. An example is the classic independent reference model (IRM). Effective solutions have been developed to model workloads with stochastic access patterns, but they incur a high cost for Zipfian workloads, which may contain millions of items, each accessed with a different frequency. This paper first presents a continuous-time model of locality for workloads with stochastic access patterns. It shows that two previous techniques, by Dan and Towsley and by Denning and Schwartz, can be interpreted as a single model using different discrete times. Using continuous time, it derives a closed-form solution for an item and a general solution that is a differentiable function. In addition, the paper presents an approximation technique that groups items into partitions. When evaluated on Zipfian workloads, a workload with millions of items can be approximated using a small number of partitions, and the continuous-time model is more accurate and faster to compute numerically. For the largest data size verifiable by trace generation and simulation, the new techniques reduce the time of locality analysis by six orders of magnitude.
    Free, publicly-accessible full text available July 2, 2026
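The continuous-time view in this abstract can be illustrated with a small sketch. Assuming (as one common stochastic model, not necessarily the paper's exact formulation) that each item is accessed as an independent Poisson process with rate equal to its Zipfian frequency, the expected number of distinct items accessed in a window of length t is the sum over items of 1 - e^(-f_i * t), and a partition-based approximation groups items with similar frequencies; the partitioning scheme below (logarithmically spaced rank boundaries) is an illustrative choice, not the paper's:

```python
import math

def zipf_freqs(n, s=1.0):
    """Zipfian access probabilities: the rank-k item has weight 1/k**s."""
    w = [1.0 / k ** s for k in range(1, n + 1)]
    total = sum(w)
    return [x / total for x in w]

def footprint(freqs, t):
    """Expected distinct items accessed in a continuous window of length t,
    assuming item i is accessed as a Poisson process with rate freqs[i]:
    sum_i (1 - exp(-f_i * t))."""
    return sum(1.0 - math.exp(-f * t) for f in freqs)

def footprint_partitioned(freqs, t, n_parts=64):
    """Approximate the footprint by grouping items into partitions with
    logarithmically spaced rank boundaries, so frequencies within a
    partition are nearly equal; each partition is summarized by its
    mean rate, replacing millions of per-item terms with a few dozen."""
    f = sorted(freqs, reverse=True)
    n = len(f)
    edges = sorted({min(n, round(n ** (j / n_parts))) for j in range(1, n_parts + 1)})
    total, start = 0.0, 0
    for e in edges:
        seg = f[start:e]
        if seg:
            mean = sum(seg) / len(seg)
            total += len(seg) * (1.0 - math.exp(-mean * t))
        start = e
    return total

freqs = zipf_freqs(100_000)  # 100k items, Zipf exponent 1
exact = footprint(freqs, t=10_000)
approx = footprint_partitioned(freqs, t=10_000, n_parts=64)
```

With 64 partitions the approximation stays within a few percent of the per-item sum here, which mirrors the abstract's point that a small number of partitions can stand in for a large population of individually weighted items.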
  2. Cache management is important in exploiting locality and reducing data movement. This article studies a new type of programmable cache called the lease cache. By assigning leases, software exerts primary control over when and how long data stays in the cache. Previous work has shown an optimal solution for an ideal lease cache. This article develops and evaluates a set of practical solutions for a physical lease cache emulated in FPGA, using the full suite of PolyBench benchmarks. Compared to automatic caching, lease programming can further reduce data movement by 10% to over 60% when the data size is 16 to 3,000 times the cache size, and the techniques in this article realize over 80% of this potential. Moreover, lease programming can reduce data movement by another 0.8% to 20% after polyhedral locality optimization.
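The lease-cache mechanism this abstract describes can be sketched in a few lines. This is a minimal illustration of the semantics only, assuming logical time (one tick per access), a uniform lease, and no occupancy bound; the article's actual lease-assignment policies and the optimal solution it references are not reproduced here:

```python
class LeaseCache:
    """Minimal lease-cache model: an item stays cached until its lease
    expires, and every access renews the lease chosen by the policy.
    Unlike a physical cache, this idealized model has no capacity bound:
    occupancy is whatever the outstanding leases imply."""

    def __init__(self, lease_policy):
        self.lease_policy = lease_policy  # maps an item to a lease length
        self.expiry = {}                  # item -> tick at which its lease ends
        self.clock = 0
        self.hits = self.misses = 0

    def access(self, item):
        self.clock += 1
        if self.expiry.get(item, 0) >= self.clock:
            self.hits += 1
        else:
            self.misses += 1
        # every access renews the item's lease
        self.expiry[item] = self.clock + self.lease_policy(item)

trace = ["a", "b", "a", "c", "a", "b", "c", "a"]
cache = LeaseCache(lambda item: 2)  # uniform lease of 2 accesses
for x in trace:
    cache.access(x)
# with this trace and lease, 2 of the 8 accesses hit
```

Longer leases trade cache occupancy for extra hits, which is why lease assignment is the lever that software control over the cache exploits.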